Robot Rights? Let's Talk about Human Welfare Instead
The 'robot rights' debate, and its related question of 'robot
responsibility', invokes some of the most polarized positions in AI ethics.
While some advocate for granting robots rights on a par with human beings,
others, in stark opposition, argue that robots are not deserving of rights but
are objects that should be our slaves. Grounded in post-Cartesian philosophical
foundations, we argue not just to deny robots 'rights', but to deny that
robots, as artifacts emerging out of and mediating human being, are the kinds
of things that could be granted rights in the first place. Once we see robots
as mediators of human being, we can understand how the 'robot rights' debate
is focused on first-world problems, at the expense of urgent ethical concerns,
such as machine bias, machine-elicited human labour exploitation, and erosion
of privacy, all impacting society's least privileged individuals. We conclude
that, if human being is our starting point and human welfare is the primary
concern, the negative impacts emerging from machinic systems, as well as the
lack of responsibility taken by people designing, selling and deploying such
machines, remain the most pressing ethical discussion in AI.

Comment: Accepted to the AIES 2020 conference in New York, February 2020. The
final version of this paper will appear in Proceedings of the 2020 AAAI/ACM
Conference on AI, Ethics, and Society.
S(C)ENTINEL - monitoring automated vehicles with olfactory reliability displays
Overreliance on technology is safety-critical, and it is assumed that this could have been a main cause of severe accidents with automated vehicles. To ease the complex task of permanently monitoring vehicle behavior in the driving environment, researchers have proposed implementing reliability/uncertainty displays. Such displays allow drivers to estimate whether or not an upcoming intervention is likely. However, presenting uncertainty visually just adds more visual workload for drivers, who might also be engaged in secondary tasks. We suggest using olfactory displays as a potential solution to communicate system uncertainty and conducted a user study (N=25) in a high-fidelity driving simulator. Results of the experiment (conditions: no reliability display, purely visual reliability display, and visual-olfactory reliability display), comprising both objective (task performance) and subjective (technology acceptance model, trust scales, semi-structured interviews) measures, suggest that olfactory notifications could become a valuable extension for calibrating trust in automated vehicles.